Search for: All records

Creators/Authors contains: "Guo, Z"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available September 8, 2026
  2. Free, publicly-accessible full text available April 28, 2026
  3. Free, publicly-accessible full text available April 28, 2026
  4. Free, publicly-accessible full text available January 6, 2026
  5. Predicting future walking joint kinematics is crucial for assistive device control, especially in variable walking environments. Traditional optical motion capture systems provide kinematics data but require laborious post-processing, whereas IMU-based systems provide direct calculations but add delays due to data collection and algorithmic processing. Predicting future kinematics helps compensate for these delays, enabling real-time operation of the system. Furthermore, the predicted kinematics could serve as target trajectories for assistive devices such as exoskeletal robots and lower-limb prostheses. However, given the complexity of human mobility and environmental factors, this prediction remains challenging. To address this challenge, we propose the Dual-ED-Attention-FAM-Net, a deep learning model utilizing two encoders, two decoders, a temporal attention module, and a feature attention module. Our model outperforms the state-of-the-art LSTM model. Specifically, for Dataset A, using IMUs alone and a combination of IMUs and videos, RMSE values decrease from 4.45° to 4.22° and from 4.52° to 4.15°, respectively. For Dataset B, IMUs alone and IMUs combined with pressure insoles yield RMSE reductions from 7.09° to 6.66° and from 7.20° to 6.77°, respectively. Additionally, incorporating other modalities alongside IMUs helps improve the performance of the model. (A minimal code sketch of this kind of dual-encoder, attention-based architecture appears after this list.)
Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in areas such as science, finance, and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate the capabilities of LLMs across a wide range of tasks in the chemistry domain. We identify three key chemistry-related capabilities to explore in LLMs: understanding, reasoning, and explaining, and we establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets, facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama, and Galactica) are evaluated on each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperforms the other models and that the LLMs exhibit different levels of competitiveness across the eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitations of current LLMs and the impact of in-context learning settings on LLM performance across various chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench. (A minimal sketch of the zero-shot versus few-shot prompt construction follows this list.)
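Below is a minimal, hypothetical PyTorch sketch of a dual-encoder, dual-attention layout like the one item 5 describes: two modality encoders, a feature attention module, a temporal attention module, and a decoder that rolls out future joint angles. Every module name, dimension, and wiring choice here is an assumption for illustration; the abstract does not specify the authors' implementation, and the two decoders mentioned there are collapsed into one for brevity.

```python
# Hypothetical sketch of a dual-encoder model with feature and temporal
# attention, loosely following the abstract of Dual-ED-Attention-FAM-Net.
# All names, sizes, and wiring are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Weights input channels (e.g., IMU axes) by learned importance."""
    def __init__(self, n_features: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(n_features, n_features), nn.Sigmoid())

    def forward(self, x):                    # x: (batch, time, features)
        weights = self.score(x.mean(dim=1))  # (batch, features)
        return x * weights.unsqueeze(1)

class DualEDAttentionNet(nn.Module):
    def __init__(self, imu_dim=24, aux_dim=8, hidden=64, horizon=10, out_dim=6):
        super().__init__()
        self.fam_imu = FeatureAttention(imu_dim)
        self.fam_aux = FeatureAttention(aux_dim)
        self.enc_imu = nn.GRU(imu_dim, hidden, batch_first=True)
        self.enc_aux = nn.GRU(aux_dim, hidden, batch_first=True)
        # Temporal self-attention over the concatenated encoder states.
        self.tam = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.dec = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)
        self.horizon = horizon

    def forward(self, imu, aux):             # (batch, time, channels) each
        h_imu, _ = self.enc_imu(self.fam_imu(imu))
        h_aux, _ = self.enc_aux(self.fam_aux(aux))
        h = torch.cat([h_imu, h_aux], dim=-1)
        ctx, _ = self.tam(h, h, h)           # weight informative time steps
        # Repeat the final context vector as decoder input for each future
        # step, then map hidden states to predicted joint angles.
        dec_in = ctx[:, -1:, :].repeat(1, self.horizon, 1)
        out, _ = self.dec(dec_in)
        return self.head(out)                # (batch, horizon, joints)

model = DualEDAttentionNet()
pred = model(torch.randn(2, 100, 24), torch.randn(2, 100, 8))
print(pred.shape)  # torch.Size([2, 10, 6])
```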
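And here is a small, self-contained sketch of the zero-shot versus few-shot in-context learning setup item 6 describes. The task instruction, demonstration pairs, and the query_llm stub are illustrative placeholders, not code or prompts taken from the ChemLLMBench repository.

```python
# Hypothetical sketch of zero-shot vs. few-shot in-context evaluation.
# The task, demonstrations, and query_llm stub are placeholders.
from typing import List, Tuple

def build_prompt(task_instruction: str,
                 demonstrations: List[Tuple[str, str]],
                 query: str) -> str:
    """Assemble an in-context learning prompt.

    An empty demonstration list gives the zero-shot setting;
    k demonstrations give the k-shot setting.
    """
    parts = [task_instruction]
    for question, answer in demonstrations:
        parts.append(f"Input: {question}\nOutput: {answer}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

def query_llm(prompt: str) -> str:
    """Stub standing in for a call to GPT-4, Llama, Galactica, etc."""
    raise NotImplementedError("Wire up your model API here.")

# Illustrative example: a SMILES-to-name style task (placeholder data).
instruction = "Convert the SMILES string to its IUPAC name."
demos = [("CCO", "ethanol"), ("CC(=O)O", "acetic acid")]
print(build_prompt(instruction, demos, "C1=CC=CC=C1"))  # two-shot prompt
```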